Re: [PATCH 1/1] ufs: core: fix &hwq->cq_lock deadlock issue

On 4/21/2023 3:56 PM, Alice Chao wrote:
[name:lockdep&]WARNING: inconsistent lock state
[name:lockdep&]--------------------------------
[name:lockdep&]inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
[name:lockdep&]kworker/u16:4/260 [HC0[0]:SC0[0]:HE1:SE1] takes:
   ffffff8028444600 (&hwq->cq_lock){?.-.}-{2:2}, at:
ufshcd_mcq_poll_cqe_lock+0x30/0xe0
[name:lockdep&]{IN-HARDIRQ-W} state was registered at:
   lock_acquire+0x17c/0x33c
   _raw_spin_lock+0x5c/0x7c
   ufshcd_mcq_poll_cqe_lock+0x30/0xe0
   ufs_mtk_mcq_intr+0x60/0x1bc [ufs_mediatek_mod]
   __handle_irq_event_percpu+0x140/0x3ec
   handle_irq_event+0x50/0xd8
   handle_fasteoi_irq+0x148/0x2b0
   generic_handle_domain_irq+0x4c/0x6c
   gic_handle_irq+0x58/0x134
   call_on_irq_stack+0x40/0x74
   do_interrupt_handler+0x84/0xe4
   el1_interrupt+0x3c/0x78
<snip>

Possible unsafe locking scenario:
        CPU0
        ----
   lock(&hwq->cq_lock);
   <Interrupt>
     lock(&hwq->cq_lock);
   *** DEADLOCK ***
2 locks held by kworker/u16:4/260:

[name:lockdep&]
  stack backtrace:
CPU: 7 PID: 260 Comm: kworker/u16:4 Tainted: G S      W  OE
6.1.17-mainline-android14-2-g277223301adb #1
Workqueue: ufs_eh_wq_0 ufshcd_err_handler

  Call trace:
   dump_backtrace+0x10c/0x160
   show_stack+0x20/0x30
   dump_stack_lvl+0x98/0xd8
   dump_stack+0x20/0x60
   print_usage_bug+0x584/0x76c
   mark_lock_irq+0x488/0x510
   mark_lock+0x1ec/0x25c
   __lock_acquire+0x4d8/0xffc
   lock_acquire+0x17c/0x33c
   _raw_spin_lock+0x5c/0x7c
   ufshcd_mcq_poll_cqe_lock+0x30/0xe0
   ufshcd_poll+0x68/0x1b0
   ufshcd_transfer_req_compl+0x9c/0xc8
   ufshcd_err_handler+0x3bc/0xea0
   process_one_work+0x2f4/0x7e8
   worker_thread+0x234/0x450
   kthread+0x110/0x134
   ret_from_fork+0x10/0x20

For the ufs_mtk_mcq_intr() path, refer to:
https://lore.kernel.org/all/20230328103423.10970-3-powen.kao@xxxxxxxxxxxx/

When ufshcd_err_handler() is executed, the CQ event interrupt can come
in and wait for the same lock. This can happen both in the upstream
code path ufshcd_handle_mcq_cq_events() and in ufs_mtk_mcq_intr(). The
warning above is generated because &hwq->cq_lock is taken in IRQ
context while it is also taken with IRQs enabled. In
ufshcd_mcq_poll_cqe_lock(), use spin_lock_irqsave() instead of
spin_lock() to resolve the deadlock (a standalone sketch of the
pattern follows the diff below).

Signed-off-by: Alice Chao <alice.chao@xxxxxxxxxxxx>
---
 drivers/ufs/core/ufs-mcq.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
index 31df052fbc41..202ff71e1b58 100644
--- a/drivers/ufs/core/ufs-mcq.c
+++ b/drivers/ufs/core/ufs-mcq.c
@@ -299,11 +299,11 @@ EXPORT_SYMBOL_GPL(ufshcd_mcq_poll_cqe_nolock);
 unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
 				       struct ufs_hw_queue *hwq)
 {
-	unsigned long completed_reqs;
+	unsigned long completed_reqs, flags;
 
-	spin_lock(&hwq->cq_lock);
+	spin_lock_irqsave(&hwq->cq_lock, flags);
 	completed_reqs = ufshcd_mcq_poll_cqe_nolock(hba, hwq);
-	spin_unlock(&hwq->cq_lock);
+	spin_unlock_irqrestore(&hwq->cq_lock, flags);
 
 	return completed_reqs;
 }
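
For readers following along, here is a minimal, self-contained sketch
of the locking pattern above (hypothetical demo_* names, not the
actual UFS code): spin_lock() in task context leaves hard IRQs
enabled, so the CQ interrupt handler can end up spinning on a lock its
own CPU already holds, while spin_lock_irqsave() closes that window.

#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(demo_cq_lock);	/* stands in for hwq->cq_lock */

/* Hard IRQ context: interrupts are already disabled on this CPU. */
static irqreturn_t demo_cq_irq(int irq, void *data)
{
	spin_lock(&demo_cq_lock);
	/* ... poll completion queue entries ... */
	spin_unlock(&demo_cq_lock);
	return IRQ_HANDLED;
}

/* Task context, e.g. the ufs_eh_wq_0 error-handler work. */
static void demo_poll_from_task(void)
{
	unsigned long flags;

	/*
	 * A plain spin_lock() here would reproduce the lockdep splat:
	 * demo_cq_irq() can fire on this CPU while the lock is held and
	 * spin forever.  spin_lock_irqsave() masks local IRQs for the
	 * critical section; the irqsave form (rather than
	 * spin_lock_irq()) also keeps the helper callable from IRQ
	 * context, since it restores whatever IRQ state the caller had.
	 */
	spin_lock_irqsave(&demo_cq_lock, flags);
	/* ... poll completion queue entries ... */
	spin_unlock_irqrestore(&demo_cq_lock, flags);
}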

Reviewed-by: Can Guo <quic_cang@xxxxxxxxxxx>


Thanks for the fix.


Regards,

Can Guo.



