On 11/3/2021 10:06 AM, Bart Van Assche wrote:
On 11/3/21 12:46 AM, Adrian Hunter wrote:
On 02/11/2021 22:49, Bart Van Assche wrote:
static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
{
- #define DOORBELL_CLR_TOUT_US (1000 * 1000) /* 1 sec */
int ret = 0;
+
/*
- * make sure that there are no outstanding requests when
- * clock scaling is in progress
+ * Make sure that there are no outstanding requests while clock scaling
+ * is in progress. Since the error handler may submit TMFs, limit the
+ * time during which to block hba->tmf_queue in order not to block the
+ * UFS error handler.
+ *
+ * Since ufshcd_exec_dev_cmd() and ufshcd_issue_devman_upiu_cmd() lock
+ * the clk_scaling_lock before calling blk_get_request(), lock
+ * clk_scaling_lock before freezing the request queues to prevent a
+ * deadlock.
*/
- ufshcd_scsi_block_requests(hba);
How are requests from LUN queues blocked?
I will add blk_freeze_queue() calls for the LUNs.
Thanks,
Bart.
Hi Bart,
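Just to make sure I am reading the blk_freeze_queue() plan correctly: I
imagine the prepare step ending up roughly like the untested sketch below.
The helper name, the jiffies-based timeout and the error path are my
guesses rather than anything taken from your patch; the only point I am
relying on is that clk_scaling_lock is taken before the freezes, per the
lock ordering described in the new comment.

#include <linux/blk-mq.h>
#include <scsi/scsi_device.h>
#include "ufshcd.h"

/* Hypothetical sketch only; not from the actual patch. */
static int ufshcd_clock_scaling_prepare_sketch(struct ufs_hba *hba,
					       unsigned long timeout /* jiffies */)
{
	struct scsi_device *sdev;
	int ret = 0;

	/* Lock clk_scaling_lock before freezing, since ufshcd_exec_dev_cmd()
	 * takes clk_scaling_lock before calling blk_get_request().
	 */
	down_write(&hba->clk_scaling_lock);

	/* Start freezing all LUN queues and the TMF queue in parallel. */
	shost_for_each_device(sdev, hba->host)
		blk_freeze_queue_start(sdev->request_queue);
	blk_freeze_queue_start(hba->tmf_queue);

	/* Wait a bounded time so that the UFS error handler, which submits
	 * TMFs, is not blocked indefinitely on hba->tmf_queue.
	 */
	shost_for_each_device(sdev, hba->host) {
		if (blk_mq_freeze_queue_wait_timeout(sdev->request_queue,
						     timeout) <= 0)
			ret = -ETIMEDOUT;
	}
	if (blk_mq_freeze_queue_wait_timeout(hba->tmf_queue, timeout) <= 0)
		ret = -ETIMEDOUT;

	if (ret) {
		/* Undo the freezes and drop the lock on timeout. */
		shost_for_each_device(sdev, hba->host)
			blk_mq_unfreeze_queue(sdev->request_queue);
		blk_mq_unfreeze_queue(hba->tmf_queue);
		up_write(&hba->clk_scaling_lock);
	}

	/* On success the queues stay frozen and clk_scaling_lock stays held;
	 * a matching "unprepare" step would unfreeze and up_write().
	 */
	return ret;
}

If I understand the freeze semantics correctly, the freeze wait only
returns once every request that has already been allocated on a queue has
completed, which is what leads to my question below.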
In the current clock scaling code, the expectation is to scale up as
soon as possible.
For example, say the current gear is G1, there are pending requests in
the queue, the doorbell register (DBR) is empty, and a decision is made
to scale up.
During scale-up, if the queues are frozen, wouldn't those pending
requests have to be issued to the driver and executed at G1 instead of
G4, since the freeze cannot complete until they finish?
I think this would lead to higher run-to-run variance in performance.
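For comparison, the current behaviour I am referring to is roughly the
following, simplified from today's ufshcd_clock_scaling_prepare() (the
clk_scaling.is_allowed check and most error handling omitted):

#define DOORBELL_CLR_TOUT_US	(1000 * 1000)	/* 1 sec */

static int ufshcd_clock_scaling_prepare_today(struct ufs_hba *hba)
{
	/* Stop the SCSI midlayer from dispatching new commands; requests
	 * already queued in the block layer simply stay there.
	 */
	ufshcd_scsi_block_requests(hba);
	down_write(&hba->clk_scaling_lock);

	/* Only commands already in the doorbell are waited for, so an empty
	 * doorbell lets the gear change start right away and the queued
	 * requests run at the new gear once requests are unblocked.
	 */
	if (ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) {
		up_write(&hba->clk_scaling_lock);
		ufshcd_scsi_unblock_requests(hba);
		return -EBUSY;
	}

	return 0;
}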
What do you think?
Thanks,
-asd
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project