Hi,
On 2022/11/25 20:33, John Garry wrote:
On 24/11/2022 03:45, Yu Kuai wrote:
Hi,
While upgrading the kernel from 4.19 to 5.10, I found that fio single-thread 4k
sequential IO performance dropped (160 MiB/s -> 100 MiB/s); the root cause is
that queue_depth changed from 64 to 256.
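The workload is roughly the following (the exact job parameters and device name
here are illustrative; the point is one submitter doing 4k sequential IO on a
single disk):

    fio --name=seq --filename=/dev/sdb --direct=1 --ioengine=libaio \
        --rw=write --bs=4k --numjobs=1 --iodepth=128 --runtime=60 --time_based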
The change in queue_depth comes from commit 6e73550670ed1c07779706bb6cf61b99c871fc42
("scsi: megaraid_sas: Update optimal queue depth for SAS and NVMe devices"):
diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
index bd8184072bed..ddfbe6f6667a 100644
--- a/drivers/scsi/megaraid/megaraid_sas.h
+++ b/drivers/scsi/megaraid/megaraid_sas.h
@@ -2233,9 +2233,9 @@ enum MR_PD_TYPE {
/* JBOD Queue depth definitions */
#define MEGASAS_SATA_QD 32
-#define MEGASAS_SAS_QD 64
+#define MEGASAS_SAS_QD 256
#define MEGASAS_DEFAULT_PD_QD 64
-#define MEGASAS_NVME_QD 32
+#define MEGASAS_NVME_QD 64
And with the default nr_requests of 256, a queue_depth of 256 means the
elevator has no effect: the device can accept every request the block layer is
allowed to hold, so requests are dispatched immediately instead of waiting in
the scheduler queue, and IO can't be merged in this test case. Hence it doesn't
make sense to me to set the default queue_depth to 256.
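The interaction is easy to see from sysfs (the device name below is just an
example):

    cat /sys/block/sdb/queue/nr_requests    # block-layer/scheduler depth, 256 by default
    cat /sys/block/sdb/device/queue_depth   # 256 after this change, 64 before
    echo 64 > /sys/block/sdb/device/queue_depth   # restore the old depth for comparison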
Is there any reason why MEGASAS_SAS_QD was changed to 256?
Thanks,
Kuai
Which type of drive do you use?
SAS SSDs
BTW, I tested with NVMe as well; the default elevator is deadline and the
queue_depth seems too small, so performance is far from optimal.
The current default values don't seem good to me... 😒
Thanks,
Kuai
JFYI, in case missed, there was this discussion on SCSI queue depth a
while ago:
https://lore.kernel.org/linux-scsi/4b50f067-a368-2197-c331-a8c981f5cd02@xxxxxxxxxx/
Thanks,
John