On Thu, Jan 09, 2020 at 11:55:12AM +0000, John Garry wrote:
> On 09/12/2019 10:10, Sumit Saxena wrote:
> > On Mon, Dec 2, 2019 at 9:09 PM Hannes Reinecke <hare@xxxxxxx> wrote:
> > >
> > > Fusion adapters can steer completions to individual queues, and
> > > we now have support for shared host-wide tags.
> > > So we can enable multiqueue support for fusion adapters and
> > > drop the hand-crafted interrupt affinity settings.
> >
> > Hi Hannes,
> >
> > Ming Lei also proposed similar changes in the megaraid_sas driver some
> > time back, and it had resulted in a performance drop:
> > https://patchwork.kernel.org/patch/10969511/
> >
> > So we will do some performance tests with this patch and update you.
> >
>
> Hi Sumit,
>
> I was wondering if you had a chance to do this test yet?
>
> It would be good to know, so we can try to progress this work.

It looks like most of the comments in the following link haven't been addressed yet:

https://lore.kernel.org/linux-block/20191129002540.GA1829@ming.t460p/

> Firstly, too much ((nr_hw_queues - 1) times) memory is wasted. Secondly, IO
> latency could be increased by too deep a scheduler queue depth. Finally, CPU
> could be wasted in retrying the run of a busy hw queue.
>
> Wrt. driver tags, this patch may be worse, given the average limit for
> each LUN is reduced by (nr_hw_queues) times, see hctx_may_queue().
>
> Another change is bt_wait_ptr(). Before your patches, there is a single
> .wait_index; now the number of .wait_index instances is changed to
> nr_hw_queues.
>
> Also the run queue count is increased a lot in SCSI's IO completion, see
> scsi_end_request().

I guess the memory waste won't be a blocker. But it may not be acceptable
behavior to reduce the average active queue depth for each LUN by
nr_hw_queues times while the scheduler queue depth is increased by
nr_hw_queues times, compared with a single queue.

thanks,
Ming