Re: [PATCH 1/2] LoongArch: KVM: Protect kvm_check_requests() with SRCU

On 2024/12/3 5:17 PM, Huacai Chen wrote:
On Tue, Dec 3, 2024 at 4:27 PM bibo mao <maobibo@xxxxxxxxxxx> wrote:



On 2024/12/3 2:50 PM, Huacai Chen wrote:
When we enable lockdep we get such a warning:

   =============================
   WARNING: suspicious RCU usage
   6.12.0-rc7+ #1891 Tainted: G        W
   -----------------------------
   include/linux/kvm_host.h:1043 suspicious rcu_dereference_check() usage!
   other info that might help us debug this:
   rcu_scheduler_active = 2, debug_locks = 1
   1 lock held by qemu-system-loo/948:
    #0: 90000001184a00a8 (&vcpu->mutex){+.+.}-{4:4}, at: kvm_vcpu_ioctl+0xf4/0xe20 [kvm]
   stack backtrace:
   CPU: 0 UID: 0 PID: 948 Comm: qemu-system-loo Tainted: G        W          6.12.0-rc7+ #1891
   Tainted: [W]=WARN
   Hardware name: Loongson Loongson-3A5000-7A1000-1w-CRB/Loongson-LS3A5000-7A1000-1w-CRB, BIOS vUDK2018-LoongArch-V2.0.0-prebeta9 10/21/2022
   Stack : 0000000000000089 9000000005a0db9c 90000000071519c8 900000012c578000
           900000012c57b920 0000000000000000 900000012c57b928 9000000007e53788
           900000000815bcc8 900000000815bcc0 900000012c57b790 0000000000000001
           0000000000000001 4b031894b9d6b725 0000000004dec000 90000001003299c0
           0000000000000414 0000000000000001 000000000000002d 0000000000000003
           0000000000000030 00000000000003b4 0000000004dec000 90000001184a0000
           900000000806d000 9000000007e53788 00000000000000b4 0000000000000004
           0000000000000004 0000000000000000 0000000000000000 9000000107baf600
           9000000008916000 9000000007e53788 9000000005924778 0000000010000044
           00000000000000b0 0000000000000004 0000000000000000 0000000000071c1d
           ...
   Call Trace:
   [<9000000005924778>] show_stack+0x38/0x180
   [<90000000071519c4>] dump_stack_lvl+0x94/0xe4
   [<90000000059eb754>] lockdep_rcu_suspicious+0x194/0x240
   [<ffff8000022143bc>] kvm_gfn_to_hva_cache_init+0xfc/0x120 [kvm]
   [<ffff80000222ade4>] kvm_pre_enter_guest+0x3a4/0x520 [kvm]
   [<ffff80000222b3dc>] kvm_handle_exit+0x23c/0x480 [kvm]
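The check that fires is the srcu_dereference_check() inside kvm_memslots(),
reached from kvm_gfn_to_hva_cache_init() without kvm->srcu held. Roughly (a
simplified, paraphrased sketch of the generic KVM code, not the exact
upstream source):

/*
 * Simplified, paraphrased sketch (not exact upstream source):
 * kvm_memslots() dereferences an SRCU-protected pointer, so lockdep
 * warns whenever it is reached without srcu_read_lock(&kvm->srcu)
 * held (or kvm->slots_lock on the update side).
 */
static inline struct kvm_memslots *__kvm_memslots(struct kvm *kvm, int as_id)
{
	return srcu_dereference_check(kvm->memslots[as_id], &kvm->srcu,
				      lockdep_is_held(&kvm->slots_lock));
}

int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
			      gpa_t gpa, unsigned long len)
{
	struct kvm_memslots *slots = kvm_memslots(kvm);	/* <-- fires the warning */

	return __kvm_gfn_to_hva_cache_init(slots, ghc, gpa, len);
}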

Fix it by protecting kvm_check_requests() with SRCU.

Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Huacai Chen <chenhuacai@xxxxxxxxxxx>
---
   arch/loongarch/kvm/vcpu.c | 4 +++-
   1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index cab1818be68d..d18a4a270415 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -240,7 +240,7 @@ static void kvm_late_check_requests(struct kvm_vcpu *vcpu)
    */
   static int kvm_enter_guest_check(struct kvm_vcpu *vcpu)
   {
-     int ret;
+     int idx, ret;

       /*
        * Check conditions before entering the guest
@@ -249,7 +249,9 @@ static int kvm_enter_guest_check(struct kvm_vcpu *vcpu)
       if (ret < 0)
               return ret;

+     idx = srcu_read_lock(&vcpu->kvm->srcu);
       ret = kvm_check_requests(vcpu);
+     srcu_read_unlock(&vcpu->kvm->srcu, idx);

       return ret;
   }
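With the hunk applied, kvm_enter_guest_check() reads roughly as follows. This
is a reconstruction from the context lines above, not the full file; the
elided first call is assumed to be xfer_to_guest_mode_handle_work(), per the
discussion below.

static int kvm_enter_guest_check(struct kvm_vcpu *vcpu)
{
	int idx, ret;

	/*
	 * Check conditions before entering the guest
	 * (assumed: xfer_to_guest_mode_handle_work(); the hunk elides it)
	 */
	ret = xfer_to_guest_mode_handle_work(vcpu);
	if (ret < 0)
		return ret;

	/* Hold kvm->srcu across request handling that may touch memslots */
	idx = srcu_read_lock(&vcpu->kvm->srcu);
	ret = kvm_check_requests(vcpu);
	srcu_read_unlock(&vcpu->kvm->srcu, idx);

	return ret;
}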

How about taking the SRCU read lock around the closest function,
kvm_update_stolen_time()?
I have considered this method before. But then I read vcpu_run() of
x86: it protects the whole vcpu_run() with SRCU except for the subroutine
xfer_to_guest_mode_handle_work(), so I think protecting the whole
kvm_check_requests() is more like x86.

srcu_read_lock() is there to protect the memslot and io_bus regions from
being removed and freed while they are in use.
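Roughly, the pairing looks like this (a simplified sketch of the generic
SRCU pattern, not the exact KVM memslot update code; new_slots/old_slots/
as_id are illustrative names):

/* reader side, e.g. request handling before guest entry */
idx = srcu_read_lock(&kvm->srcu);
slots = srcu_dereference(kvm->memslots[as_id], &kvm->srcu);
/* ... use slots, e.g. via kvm_gfn_to_hva_cache_init() ... */
srcu_read_unlock(&kvm->srcu, idx);

/* update side, e.g. memslot deletion */
rcu_assign_pointer(kvm->memslots[as_id], new_slots);
synchronize_srcu(&kvm->srcu);	/* wait for all readers of old_slots */
kfree(old_slots);		/* now safe to free */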

Both are OK for me, it is up to you.

Regards
Bibo Mao

Huacai


   static int kvm_check_requests(struct kvm_vcpu *vcpu)
   {
+       int idx;
+
          if (!kvm_request_pending(vcpu))
                  return RESUME_GUEST;

@@ -213,8 +215,11 @@ static int kvm_check_requests(struct kvm_vcpu *vcpu)
          if (kvm_dirty_ring_check_request(vcpu))
                  return RESUME_HOST;

-       if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
+       if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu)) {
+               idx = srcu_read_lock(&vcpu->kvm->srcu);
                  kvm_update_stolen_time(vcpu);
+               srcu_read_unlock(&vcpu->kvm->srcu, idx);
+       }

          return RESUME_GUEST;
   }

Both methods look good to me.

Regards
Bibo Mao
