> -----Original Message-----
> From: Christoph Hellwig [mailto:hch@xxxxxx]
> Sent: Tuesday, July 11, 2017 7:28 PM
> To: Shivasharan S
> Cc: linux-scsi@xxxxxxxxxxxxxxx; martin.petersen@xxxxxxxxxx;
> thenzl@xxxxxxxxxx; jejb@xxxxxxxxxxxxxxxxxx;
> kashyap.desai@xxxxxxxxxxxx; sumit.saxena@xxxxxxxxxxxx;
> hare@xxxxxxxx; hch@xxxxxx
> Subject: Re: [PATCH v2 11/15] megaraid_sas: Set device queue_depth same as
> HBA can_queue value in scsi-mq mode
>
> On Wed, Jul 05, 2017 at 05:00:25AM -0700, Shivasharan S wrote:
> > Currently driver sets default queue_depth for VDs at 256 and JBODs
> > based on interface type, ie., for SAS JBOD QD will be 64, for SATA JBOD QD
> > will be 32.
> > During performance runs with scsi-mq enabled, we are seeing better
> > results by setting QD same as HBA queue_depth.
>
> Please no scsi-mq specifics. just do this unconditionally.

Chris - the intent behind the mq-specific check is mainly that sequential HDD
workloads see a penalty due to an mq scheduler issue. We did this exercise
prior to mq-deadline support. Making the change generic for both non-mq and
mq would be cleaner, but some users may not like to see a regression.

E.g., with QD = 32 for a SATA PD, file system creation may be faster compared
to a large QD: queue depth throttling allows soft merging at the block layer,
so FS creation completes faster thanks to the IO merges. The same will not be
true if we change the queue depth logic (i.e., increase the device queue depth
to the HBA QD).

We have the choice to drop this patch entirely and ask users to do the sysfs
settings themselves if they hit the scsi-mq performance issue for sequential
HDD workloads. With this patch, we want the driver to provide better QD
settings by default. A rough sketch of the intended default selection is
appended below for reference.

Thanks, Kashyap
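
For reference, a minimal sketch of the behaviour being discussed, not the
actual megaraid_sas code: in ->slave_configure(), use the HBA can_queue value
when scsi-mq is enabled, and fall back to the interface-based defaults
otherwise. The example_slave_configure() name and the is_vd()/is_sata_jbod()
helpers are hypothetical placeholders; scsi_change_queue_depth(),
shost_use_blk_mq() and shost->can_queue are the stock SCSI midlayer
interfaces of that era.

    /*
     * Illustrative sketch only -- not the megaraid_sas implementation.
     * Picks the per-device queue depth: HBA can_queue under scsi-mq,
     * otherwise the legacy per-interface defaults (256 VD, 64 SAS JBOD,
     * 32 SATA JBOD).  is_vd()/is_sata_jbod() are hypothetical helpers.
     */
    #include <scsi/scsi_device.h>
    #include <scsi/scsi_host.h>

    static int example_slave_configure(struct scsi_device *sdev)
    {
            struct Scsi_Host *shost = sdev->host;
            int qd;

            if (shost_use_blk_mq(shost)) {
                    /* scsi-mq: queue as deep as the HBA itself can */
                    qd = shost->can_queue;
            } else if (is_vd(sdev)) {               /* hypothetical helper */
                    qd = 256;                       /* VD default */
            } else if (is_sata_jbod(sdev)) {        /* hypothetical helper */
                    qd = 32;                        /* SATA JBOD default */
            } else {
                    qd = 64;                        /* SAS JBOD default */
            }

            scsi_change_queue_depth(sdev, qd);
            return 0;
    }

If the patch were dropped instead, users could still raise or lower the
per-device depth at runtime through the writable queue_depth sysfs attribute
of the SCSI device, which is what the "sysfs settings" option above refers to.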