Re: [Regression] Linux-Next Merge 25Jul2018 breaks mmc on Tegra.

Adrian, Ulf Hansson and anyone else, could you take a look at the WARN_ON(host->cmd) warning in sdhci_send_command()? It seems only one command is allowed to be in flight at a time, but I am not sure how that is guaranteed.
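
For reference, the check in question looks roughly like this (a heavily trimmed paraphrase of sdhci_send_command() in drivers/mmc/host/sdhci.c, not the exact code):

void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd)
{
	/*
	 * host->cmd tracks the single command the controller is
	 * executing; the WARN_ON fires if a new command is issued
	 * while the previous one is still outstanding, which is
	 * exactly the warning reported here.
	 */
	WARN_ON(host->cmd);

	/* ... timeout calculation and data preparation elided ... */

	host->cmd = cmd;

	/* ... command written to the controller registers ... */
}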

We didn't guarantee it, but it didn't happen before "blk-mq: issue directly
if hw queue isn't busy in case of 'none'".

OK, thanks for clarifying that. As I mentioned, what matters here is the timing change.

We did consider adding a mutex; see:
https://lore.kernel.org/lkml/CAPDyKFr8tiJXSL-weQjGJ3DfRrfv8ZAFY8=ZECLNgSe_43S8Rw@xxxxxxxxxxxxxx/

However, might the following do instead, do you think?

If parallel dispatch isn't supported, I am just wondering: why not set the hw queue depth to 1? That seems a simple way to fix this issue.
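
To make that concrete, here is roughly where the hw queue depth comes from in mmc_mq_init() (drivers/mmc/core/queue.c, trimmed; the q_depth computation is elided). The suggestion above would amount to forcing q_depth to 1 for hosts that cannot dispatch in parallel:

	memset(&mq->tag_set, 0, sizeof(mq->tag_set));
	mq->tag_set.ops = &mmc_mq_ops;
	mq->tag_set.queue_depth = q_depth;	/* the "hw queue depth" */
	mq->tag_set.numa_node = NUMA_NO_NODE;
	mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING;
	mq->tag_set.nr_hw_queues = 1;
	mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
	mq->tag_set.driver_data = mq;

	ret = blk_mq_alloc_tag_set(&mq->tag_set);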

First, it isn't 1.  It is 2 for devices with no command queue because we prepare a request while the previous one completes.  Otherwise it is the
command queue depth.

Could you share where the prepare function is? What does the prepare
function do?

The prepare function is mmc_pre_req().

The prepare function lets the host controller driver DMA-map the request in advance. On some architectures the mapping is slow enough that doing it ahead of time yields a significant performance benefit.
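
As an illustration of what that buys, a host driver's pre_req/post_req hooks can DMA-map the next request's scatterlist while the previous request is still executing. This is a minimal sketch of a hypothetical driver (the foo_* names are made up; the mmc_host_ops hooks and helpers are the real ones):

#include <linux/dma-mapping.h>
#include <linux/mmc/host.h>

static void foo_pre_req(struct mmc_host *host, struct mmc_request *mrq)
{
	struct mmc_data *data = mrq->data;

	if (!data)
		return;

	/*
	 * Map the scatterlist ahead of time and record that in
	 * host_cookie, so the issue path can skip the (potentially
	 * slow) mapping.
	 */
	data->host_cookie = dma_map_sg(mmc_dev(host), data->sg,
				       data->sg_len, mmc_get_dma_dir(data));
}

static void foo_post_req(struct mmc_host *host, struct mmc_request *mrq,
			 int err)
{
	struct mmc_data *data = mrq->data;

	if (data && data->host_cookie) {
		dma_unmap_sg(mmc_dev(host), data->sg, data->sg_len,
			     mmc_get_dma_dir(data));
		data->host_cookie = 0;
	}
}

static const struct mmc_host_ops foo_ops = {
	/* .request, .set_ios, etc. elided */
	.pre_req	= foo_pre_req,
	.post_req	= foo_post_req,
};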


If the device has no command queue, I understand that only one command can be queued/issued to the controller at a time. If that is true, the queue depth should be 1.


Secondly, we expect an elevator to be used. The elevator needs a decent number of requests to work with, and the default number of requests is currently tied to the queue depth.

There are two queue depths here. We are talking about the hw queue depth, which is the maximum number of commands the controller can have queued. The other is the scheduler queue depth, which defaults to twice the hw queue depth.

We may be able to improve this case a bit: currently, if the hw queue depth is 1, the scheduler queue depth defaults to 2, but that default isn't good, since the legacy path sets the scheduler queue depth to 128 even though the minimum depth is 4.
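
For reference, that default comes from blk_mq_init_sched() in block/blk-mq-sched.c (trimmed below), and the legacy numbers are BLKDEV_MAX_RQ (128) and BLKDEV_MIN_RQ (4) from include/linux/blkdev.h. With a hw queue depth of 1, this leaves the elevator only 2 requests:

	/*
	 * Default to double of smaller one between hw queue_depth and 128,
	 * since we don't split into sync/async like the old code did.
	 * Additionally, this is a per-hw queue depth.
	 */
	q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth,
				   BLKDEV_MAX_RQ);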





diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 648eb6743ed5..6edffeed9953 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -238,10 +238,6 @@ static void mmc_mq_exit_request(struct blk_mq_tag_set *set, struct request *req,
 	mmc_exit_request(mq->queue, req);
 }
 
-/*
- * We use BLK_MQ_F_BLOCKING and have only 1 hardware queue, which means requests
- * will not be dispatched in parallel.
- */
 static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 				    const struct blk_mq_queue_data *bd)
 {
@@ -264,7 +260,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	spin_lock_irq(q->queue_lock);
 
-	if (mq->recovery_needed) {
+	if (mq->recovery_needed || mq->busy) {
 		spin_unlock_irq(q->queue_lock);
 		return BLK_STS_RESOURCE;
 	}
@@ -291,6 +287,9 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 		break;
 	}
 
+	/* Parallel dispatch of requests is not supported at the moment */
+	mq->busy = true;
+
 	mq->in_flight[issue_type] += 1;
 	get_card = (mmc_tot_in_flight(mq) == 1);
 	cqe_retune_ok = (mmc_cqe_qcnt(mq) == 1);
@@ -333,9 +332,12 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 		mq->in_flight[issue_type] -= 1;
 		if (mmc_tot_in_flight(mq) == 0)
 			put_card = true;
+		mq->busy = false;
 		spin_unlock_irq(q->queue_lock);
 		if (put_card)
 			mmc_put_card(card, &mq->ctx);
+	} else {
+		WRITE_ONCE(mq->busy, false);
 	}
 
 	return ret;
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 17e59d50b496..9bf3c9245075 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -81,6 +81,7 @@ struct mmc_queue {
 	unsigned int		cqe_busy;
 #define MMC_CQE_DCMD_BUSY	BIT(0)
 #define MMC_CQE_QUEUE_FULL	BIT(1)
+	bool			busy;
 	bool			use_cqe;
 	bool			recovery_needed;
 	bool			in_recovery;

Sorry, I am not familiar with the mmc code, so I can't comment on the above patch.

Right, but if we return BLK_STS_RESOURCE to avoid parallel dispatch, do we
need to worry about ensuring the queue gets run later?

Yeah, blk-mq can cover that.
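
A sketch of the contract being relied on (simplified, not the literal blk-mq dispatch code, though the helpers named are real): when ->queue_rq() returns BLK_STS_RESOURCE, blk-mq puts the request back and makes sure the hw queue runs again after a short delay (BLK_MQ_RESOURCE_DELAY, 3 ms in block/blk-mq.c), so a request refused while mq->busy is set is retried automatically:

	ret = q->mq_ops->queue_rq(hctx, &bd);
	if (ret == BLK_STS_RESOURCE) {
		/* Give the request back and re-run the queue shortly */
		blk_mq_requeue_request(rq, true);
		blk_mq_delay_run_hw_queue(hctx, 3 /* msecs */);
	}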

Peter, could you test if the diff I sent also fixes your original regression?


Good Morning,
I apologize for the delay in returning the log with the latest debug patch, but I encountered a new bug with the io scheduler enabled that I was trying to work around. While enabling the io scheduler got rid of the warnings, I was still getting cache corruption after a period of time. This manifested itself as ENOMEM errors whenever accessing files that had been recently written, and those files showed up as ??????? when ls -al was run. After a reboot those files were accessible without issue, until they were written again.

I have attached the log from the latest test.

Break.

Adrian,
Your patch appears to have corrected the errors, and it boots without issue. I will have to run an extended test to ensure the bug I just mentioned above does not manifest either.

Thanks everyone!

Adrian,

Long-term testing appears to confirm that this bug is resolved by your patch.

If you submit it, please respond with the commit and I'll attach it to the bug report as the resolution.




