Why is MEGASAS_SAS_QD set to 256?

Hi,

While upgrading the kernel from 4.19 to 5.10, I found that fio
single-thread 4k sequential I/O performance dropped (160 MiB/s ->
100 MiB/s). The root cause is that queue_depth changed from 64 to 256
with the following commit:

commit 6e73550670ed1c07779706bb6cf61b99c871fc42
scsi: megaraid_sas: Update optimal queue depth for SAS and NVMe devices

diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
index bd8184072bed..ddfbe6f6667a 100644
--- a/drivers/scsi/megaraid/megaraid_sas.h
+++ b/drivers/scsi/megaraid/megaraid_sas.h
@@ -2233,9 +2233,9 @@ enum MR_PD_TYPE {

 /* JBOD Queue depth definitions */
 #define MEGASAS_SATA_QD        32
-#define MEGASAS_SAS_QD 64
+#define MEGASAS_SAS_QD 256
 #define MEGASAS_DEFAULT_PD_QD  64
-#define MEGASAS_NVME_QD                32
+#define MEGASAS_NVME_QD        64
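
For context, these constants only take effect because the driver hands
them to the SCSI midlayer when each device is configured. Below is a
minimal sketch of that flow (illustrative only, not the actual
megaraid_sas code; the is_sas_device()/is_sata_device() helpers are
hypothetical, and only scsi_change_queue_depth() is the real midlayer
interface):

#include <scsi/scsi_device.h>
#include "megaraid_sas.h"	/* for the MEGASAS_*_QD constants */

/* Illustrative sketch, not the actual megaraid_sas code. */
static int example_slave_configure(struct scsi_device *sdev)
{
	int qd = MEGASAS_DEFAULT_PD_QD;

	if (is_sas_device(sdev))	/* hypothetical helper */
		qd = MEGASAS_SAS_QD;	/* 256 after this commit */
	else if (is_sata_device(sdev))	/* hypothetical helper */
		qd = MEGASAS_SATA_QD;

	scsi_change_queue_depth(sdev, qd);
	return 0;
}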


And with the default nr_requests of 256, a queue_depth of 256 leaves
the elevator with no effect: a request can be dispatched to the device
the moment it is submitted, so it never waits in the scheduler queue
where subsequent sequential I/O could be merged with it. In this test
case no merging happens at all. Hence it doesn't make sense to me to
set the default queue_depth to 256.
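
To make the arithmetic concrete, here is a toy userspace model I put
together (my own simplification, not actual blk-mq code) of how many
submitted requests end up waiting in the elevator, where merging can
happen:

#include <stdio.h>

/* Toy model (not actual blk-mq code): a request only sits in the
 * elevator -- where a later contiguous 4k bio could merge into it --
 * while the device queue is full. With queue_depth == nr_requests,
 * nothing ever waits, so nothing can merge. */
int main(void)
{
	int nr_requests = 256;
	int queue_depth = 256;	/* try 64 to model the old default */
	int on_device = 0, in_elevator = 0;

	for (int i = 0; i < nr_requests; i++) {
		if (on_device < queue_depth)
			on_device++;	/* dispatched immediately */
		else
			in_elevator++;	/* queued, merge candidate */
	}

	printf("merge candidates in elevator: %d\n", in_elevator);
	return 0;
}

With queue_depth = 256 this prints 0; with the old value of 64 it
would print 192.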

Is there any reason why MEGASAS_SAS_QD was changed to 256?

Thanks,
Kuai



