On 12/7/2023 2:38 PM, Manivannan Sadhasivam wrote:
On Mon, Nov 27, 2023 at 03:19:49PM +0800, Qiang Yu wrote:
On 11/24/2023 6:09 PM, Manivannan Sadhasivam wrote:
On Tue, Nov 14, 2023 at 01:27:41PM +0800, Qiang Yu wrote:
From: Hemant Kumar <quic_hemantk@xxxxxxxxxxx>
If CONFIG_TRACE_IRQFLAGS is enabled, irq will be enabled once __local_bh_
enable_ip is called as part of write_unlock_bh. Hence, let's take irqsave
"__local_bh_enable_ip" is a function name, so you should not break it.
Thanks for letting me know, will note this in the following patch.
lock after TRE is generated to avoid running write_unlock_bh when irqsave
lock is held.
I still don't understand this commit message. Where is the write_unlock_bh()
being called?
- Mani
write_unlock_bh() is invoked in mhi_gen_tre().
The calling flow is like this:
mhi_queue
    read_lock_irqsave(&mhi_cntrl->pm_lock, flags)
    mhi_gen_tre
        write_lock_bh(&mhi_chan->lock)
        write_unlock_bh(&mhi_chan->lock) // will enable irq if CONFIG_TRACE_IRQFLAGS is enabled
    read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags)
After adding this patch, the calling flow becomes:
mhi_queue
    mhi_gen_tre
        write_lock_bh(&mhi_chan->lock)
        write_unlock_bh(&mhi_chan->lock)
    read_lock_irqsave(&mhi_cntrl->pm_lock, flags)
    read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags)
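To make the ordering concrete, here is a minimal sketch of the two locking skeletons. This is not the actual mhi_queue()/mhi_gen_tre() code; the locks below are just stand-ins for mhi_cntrl->pm_lock and mhi_chan->lock, and the TRE/doorbell work is elided.

#include <linux/spinlock.h>

static DEFINE_RWLOCK(pm_lock);          /* stand-in for mhi_cntrl->pm_lock */
static DEFINE_RWLOCK(chan_lock);        /* stand-in for mhi_chan->lock */

/*
 * Old ordering: chan_lock is released with write_unlock_bh() while pm_lock
 * is still held with IRQs saved. With CONFIG_TRACE_IRQFLAGS,
 * __local_bh_enable_ip() toggles hard IRQs around the softirq count update,
 * so interrupts end up re-enabled inside the irqsave section.
 */
static void queue_old_ordering(void)
{
        unsigned long flags;

        read_lock_irqsave(&pm_lock, flags);

        write_lock_bh(&chan_lock);
        /* ... generate TRE ... */
        write_unlock_bh(&chan_lock);    /* problematic while pm_lock irqsave is held */

        read_unlock_irqrestore(&pm_lock, flags);
}

/*
 * New ordering: the BH-protected section completes before the irqsave read
 * lock is taken, so write_unlock_bh() never runs with pm_lock held.
 */
static void queue_new_ordering(void)
{
        unsigned long flags;

        write_lock_bh(&chan_lock);
        /* ... generate TRE ... */
        write_unlock_bh(&chan_lock);

        read_lock_irqsave(&pm_lock, flags);
        /* ... take usage refs, toggle wake, ring channel doorbell ... */
        read_unlock_irqrestore(&pm_lock, flags);
}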
So this patch essentially fixes the issue caused by patch 1? If so, this should
be squashed into patch 1.
- Mani
Yes, this patch is to fix the issue caused by patch 1. Will squash patch 1 and this patch into one in the next version.
Signed-off-by: Hemant Kumar <quic_hemantk@xxxxxxxxxxx>
Signed-off-by: Lazarus Motha <quic_lmotha@xxxxxxxxxxx>
Signed-off-by: Qiang Yu <quic_qianyu@xxxxxxxxxxx>
---
drivers/bus/mhi/host/main.c | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 33f27e2..d7abd0b 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -1128,17 +1128,15 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
 	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
 		return -EIO;
 
-	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
-
 	ret = mhi_is_ring_full(mhi_cntrl, tre_ring);
-	if (unlikely(ret)) {
-		ret = -EAGAIN;
-		goto exit_unlock;
-	}
+	if (unlikely(ret))
+		return -EAGAIN;
 
 	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, buf_info, mflags);
 	if (unlikely(ret))
-		goto exit_unlock;
+		return ret;
+
+	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
 
 	/* Packet is queued, take a usage ref to exit M3 if necessary
 	 * for host->device buffer, balanced put is done on buffer completion
@@ -1158,7 +1156,6 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
 	if (dir == DMA_FROM_DEVICE)
 		mhi_cntrl->runtime_put(mhi_cntrl);
 
-exit_unlock:
 	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
 
 	return ret;
--
2.7.4